Dose-volume histogram (DVH) metrics are widely accepted evaluation criteria in the clinic. However, incorporating these metrics into deep learning dose prediction models is challenging due to their non-convexity and non-differentiability. We propose a novel moment-based loss function for predicting 3D dose distributions for challenging conventional lung intensity-modulated radiation therapy (IMRT) plans. The moment-based loss function is convex and differentiable, and can easily incorporate DVH metrics into any deep learning framework without computational overhead. The moments can also be customized to reflect clinical priorities in 3D dose prediction; for example, using higher-order moments allows better prediction in high-dose regions for serial structures. We used a large dataset of 360 conventional lung patients (240 for training, 50 for validation, and 70 for testing) treated with 2Gy $\times$ 30 fractions to train a deep learning (DL) model on clinically treated plans from our institution. We trained a UNet-like CNN architecture using computed tomography (CT), planning target volume (PTV), and organ-at-risk (OAR) contours as input to infer the corresponding voxel-wise 3D dose distribution. We evaluated three different loss functions: (1) the popular mean absolute error (MAE) loss, (2) the recently developed MAE + DVH loss, and (3) the proposed MAE + moment loss. The quality of the predictions was compared using various DVH metrics as well as the dose score and DVH score recently introduced by the AAPM knowledge-based planning grand challenge. The model with the (MAE + moment) loss function outperformed the model with MAE loss by significantly improving the DVH score (11%, p $<$ 0.01) at a similar computational cost. It also outperformed the model trained with (MAE + DVH) loss by significantly reducing the computational cost (48%) while improving the DVH score (8%, p $<$ 0.01).
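To make the idea concrete, the sketch below implements a generalized-mean ("moment") matching loss in PyTorch. The moment orders, the squared-difference aggregation, and all names are illustrative assumptions rather than the paper's exact formulation; the key property it shows is that, unlike thresholded DVH metrics, moments are smooth in the predicted dose and hence differentiable.

```python
import torch

def moment_loss(pred_dose, true_dose, struct_masks, orders=(1, 2, 10)):
    """Hedged sketch of a moment-based DVH surrogate loss.

    For each structure mask, compare the p-th generalized means
    M_p = (mean(d^p))^(1/p) of predicted vs. ground-truth dose.
    Higher orders emphasize the high-dose tail, which matters for
    serial structures.
    """
    loss = pred_dose.new_zeros(())
    for mask in struct_masks:                    # one boolean mask per PTV/OAR
        d_pred = pred_dose[mask]
        d_true = true_dose[mask]
        for p in orders:
            m_pred = d_pred.pow(p).mean().pow(1.0 / p)
            m_true = d_true.pow(p).mean().pow(1.0 / p)
            loss = loss + (m_pred - m_true) ** 2
    return loss

# combined objective; the weight lam is a free choice, not a paper value:
# total = mae_loss + lam * moment_loss(pred, target, masks)
```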
Automated analysis of optical colonoscopy (OC) video frames (to assist endoscopists during OC) is challenging due to variations in color, lighting, texture, and specular reflections. Previous methods either remove some of these variations via preprocessing (making the pipeline cumbersome) or add diverse annotated training data (which is expensive and time-consuming). We present CLTS-GAN, a new deep learning model that gives fine control over color, lighting, texture, and specular reflection synthesis for OC video frames. We show that adding these colonoscopy-specific augmentations to the training data improves state-of-the-art polyp detection/segmentation methods and can drive the next generation of OC simulators for training medical students. The code and pre-trained models for CLTS-GAN are available on the Computational Endoscopy Platform GitHub (https://github.com/nadeemlab/cep).
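CLTS-GAN itself is a learned generator, so no short snippet can reproduce it. Purely as a hand-crafted stand-in that jitters the same four axes (color, lighting, texture, specular reflection), one might write something like the following; everything here is our own illustrative construction, not the CLTS-GAN API.

```python
import torch

def colonoscopy_style_jitter(frame):
    """Crude, hand-crafted stand-in for CLTS-GAN-style augmentation.
    frame: float tensor (3, H, W) with values in [0, 1]."""
    c, h, w = frame.shape
    # color: random per-channel gain
    frame = frame * (0.8 + 0.4 * torch.rand(c, 1, 1))
    # lighting: random global gamma shift
    frame = frame.clamp(0, 1) ** (0.7 + 0.6 * torch.rand(1))
    # texture: mild multiplicative noise
    frame = frame * (1 + 0.05 * torch.randn(1, h, w))
    # specular reflection: one bright Gaussian blob at a random location
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    cy, cx = torch.randint(h, (1,)), torch.randint(w, (1,))
    blob = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * 15.0 ** 2))
    return (frame + blob.unsqueeze(0)).clamp(0, 1)
```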
Spiculations and lobulations, sharp and curved spikes on the surface of lung nodules, are strong predictors of lung cancer malignancy and are therefore routinely assessed and reported by radiologists as part of the standardized Lung-RADS clinical scoring criteria. Given the 3D geometry of the nodule and the 2D slice-by-slice assessment by radiologists, manual spiculation/lobulation annotation is a tedious task, and thus no public dataset exists to date for probing the importance of these clinically reported features in SOTA malignancy prediction algorithms. As part of this paper, we release a large-scale Clinically Interpretable Radiomics Dataset, CIRDataset, containing 956 radiologist QA/QC'ed spiculation/lobulation annotations on segmented lung nodules from two public datasets, LIDC-IDRI (N=883) and LUNGx (N=73). We also present an end-to-end deep learning model based on a multi-class Voxel2Mesh extension to segment nodules (while preserving spikes), classify spikes (sharp/spiculation and curved/lobulation), and perform malignancy prediction. Previous methods have performed malignancy prediction on the LIDC and LUNGx datasets but without robust attribution to any clinically reported/actionable features (due to known hyperparameter sensitivity issues with general attribution schemes). With the release of this comprehensively annotated CIRDataset and the end-to-end deep learning baseline, we hope that malignancy prediction methods can validate their explanations, benchmark against our baseline, and provide clinically actionable insights. The dataset, code, pretrained models, and Docker containers are available at https://github.com/nadeemlab/cir.
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. Annotation quality can be affected by various aspects such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations, which could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for the recruitment of qualified annotators in other challenging annotation tasks.
Achieving artificial intelligence (AI)-native wireless networks is necessary for the operation of future 6G applications such as the metaverse. Nonetheless, current communication schemes are, at heart, a mere reconstruction process that lacks reasoning. One key solution for evolving wireless communication into a human-like conversation is semantic communications. In this paper, a novel machine reasoning framework is proposed to pre-process and disentangle source data so as to make it semantic-ready. In particular, a novel contrastive learning framework is proposed, whereby instance and cluster discrimination are performed on the data. These two tasks increase the cohesiveness between data points mapping to semantically similar content elements and disentangle data points of semantically different content elements. Subsequently, the deep semantic clusters formed are ranked according to their level of confidence. Deep semantic clusters of highest confidence are considered learnable, semantic-rich data, i.e., data that can be used to build a language in a semantic communications system. The least confident ones are considered random, semantic-poor, memorizable data that must be transmitted classically. Our simulation results showcase the superiority of our contrastive learning approach in terms of semantic impact and minimalism. In fact, the length of the achieved semantic representation is reduced by 57.22% compared to vanilla semantic communication systems, thus achieving minimalist semantic representations.
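The two discrimination objectives can be made concrete with a short sketch. The following is a generic formulation in the spirit of contrastive clustering (SimCLR-style InfoNCE at the instance level, plus contrast between cluster-assignment columns), not the paper's exact losses; `z1, z2` are projected features and `p1, p2` soft cluster assignments for two augmented views of the same batch.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Instance discrimination: a sample's two views attract,
    all other samples in the batch repel."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                          # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def cluster_discrimination(p1, p2, tau=1.0):
    """Cluster discrimination: treat each cluster's assignment column
    as a 'sample' and contrast clusters across the two views."""
    c1, c2 = F.normalize(p1.t(), dim=1), F.normalize(p2.t(), dim=1)
    logits = c1 @ c2.t() / tau                          # (K, K)
    labels = torch.arange(c1.size(0), device=c1.device)
    return F.cross_entropy(logits, labels)

# total objective: loss = info_nce(z1, z2) + cluster_discrimination(p1, p2)
# cluster confidence could then be scored, e.g., via the max soft-assignment
# probability, to split semantic-rich from semantic-poor data.
```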
As power systems evolve into more intelligent, interactive, and flexible systems with a larger penetration of renewable energy sources, demand prediction at a short-term resolution will inevitably become more and more crucial in designing and managing the future grid, especially at the individual household level. Projecting the electricity demand of a single energy user, as opposed to the aggregated power consumption of residential load on a wide scale, is difficult because of a considerable number of volatile and uncertain factors. This paper proposes customized GRU (Gated Recurrent Unit) and Long Short-Term Memory (LSTM) architectures to address this challenging problem. LSTM and GRU are comparatively newer and among the most well-adopted deep learning approaches. The electricity consumption datasets were obtained from individual household smart meters. The comparison shows that the LSTM model performs better for home-level forecasting than the alternative prediction technique, GRU in this case. To contrast the NN-based models with a conventional statistical model, an ARIMA-based model was also developed and benchmarked against the LSTM and GRU outcomes in this study, demonstrating the performance of the proposed models on the collected time-series data.
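For concreteness, a minimal PyTorch version of this forecasting setup might look like the sketch below; the window length, hidden size, and one-step-ahead horizon are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """Minimal LSTM one-step-ahead load forecaster: a window of past
    smart-meter readings in, the next reading out."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict the next reading

# GRU variant: swap nn.LSTM for nn.GRU; the rest is unchanged.
model = LoadForecaster()
x = torch.randn(32, 48, 1)                 # e.g., 48 past readings per sample
y_hat = model(x)                           # (32, 1)
```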
Boundary conditions (BCs) are important groups of physics-enforced constraints that solutions of Partial Differential Equations (PDEs) must satisfy at specific spatial locations. These constraints carry important physical meaning and guarantee the existence and uniqueness of the PDE solution. Current neural-network-based approaches that aim to solve PDEs rely only on training data to help the model learn BCs implicitly; there is no guarantee that these models satisfy the BCs during evaluation. In this work, we propose the Boundary enforcing Operator Network (BOON), which enables BC satisfaction of neural operators by making structural changes to the operator kernel. We provide our refinement procedure and demonstrate that the solutions obtained by BOON satisfy physics-based BCs, e.g., Dirichlet, Neumann, and periodic. Numerical experiments based on multiple PDEs with a wide variety of applications indicate that the proposed approach ensures satisfaction of BCs and leads to more accurate solutions over the entire domain. The proposed correction method exhibits a (2X-20X) improvement over a given operator model in relative $L^2$ error (0.000084 relative $L^2$ error for Burgers' equation).
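BOON enforces BCs by modifying the operator kernel itself; as a simpler illustration of what "BC satisfaction by construction" means, the sketch below hard-enforces Dirichlet BCs in 1D with the classical lifting ansatz $u(x) = a(1-x) + bx + x(1-x)\,N(x)$. This is a different (output-correction) mechanism from BOON's kernel refinement, shown only to make the idea concrete.

```python
import torch
import torch.nn as nn

class HardDirichlet1D(nn.Module):
    """Enforce u(0)=a, u(1)=b exactly on [0, 1]: the lifting term satisfies
    the BCs and the bump x(1-x) zeroes the network at both endpoints.
    Illustrative output-correction trick, not BOON's kernel modification."""
    def __init__(self, a=0.0, b=0.0):
        super().__init__()
        self.a, self.b = a, b
        self.net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, x):                      # x: (N, 1) in [0, 1]
        lift = self.a * (1 - x) + self.b * x   # satisfies the BCs exactly
        bump = x * (1 - x)                     # vanishes at both endpoints
        return lift + bump * self.net(x)

u = HardDirichlet1D(a=1.0, b=-1.0)
x = torch.linspace(0, 1, 5).unsqueeze(1)
print(u(x)[0], u(x)[-1])                       # exactly 1.0 and -1.0
```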
This paper considers improving the wireless communication and computation efficiency of federated learning (FL) via model quantization. In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices. The goal is to jointly determine the bitwidths employed for local FL model quantization and the set of devices participating in FL training at each iteration. This is posed as an optimization problem that aims to minimize the training loss of quantized FL under a per-iteration device sampling budget and delay requirement. To derive the solution, an analytical characterization is performed to show how the limited wireless resources and the induced quantization errors affect the performance of the proposed FL method. The analytical results show that the improvement in FL training loss between two consecutive iterations depends on the device selection and quantization scheme as well as on several parameters inherent to the model being learned. Given linear-regression-based estimates of these model properties, it is shown that the FL training process can be described as a Markov decision process (MDP), and a model-based reinforcement learning (RL) method is then proposed to optimize action selection over iterations. Compared to model-free RL, this model-based RL approach leverages the derived mathematical characterization of the FL training process to discover an effective device selection and quantization scheme without imposing additional communication overhead on the devices. Simulation results show that the proposed FL algorithm can reduce convergence time by 29% and 63% compared to a model-free RL method and standard FL, respectively.
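A common building block behind such schemes is an unbiased stochastic quantizer whose error shrinks as the bitwidth grows; a minimal sketch follows. The function and variable names are illustrative, and the paper's exact quantizer and aggregation rule may differ.

```python
import torch

def stochastic_quantize(w, bits):
    """Quantize a weight tensor to 2^bits uniform levels spanning
    [w.min(), w.max()], with randomized rounding so that E[Q(w)] = w."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    if hi == lo:                               # constant tensor: nothing to do
        return w.clone()
    scale = (hi - lo) / levels
    x = (w - lo) / scale                       # position in units of levels
    floor = x.floor()
    q = floor + (torch.rand_like(w) < (x - floor)).float()
    return lo + q * scale

# each device would quantize its local update before the uplink, e.g.:
# update_q = stochastic_quantize(local_update, bits=device_bitwidth)
```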
Medical image segmentation assists in computer-aided diagnosis, surgery, and treatment. Digitized tissue slide images are used to analyze and segment glands, nuclei, and other biomarkers, which are further used in computer-aided medical applications. To this end, many researchers have developed different neural networks to segment histology images; mostly, these networks are based on encoder-decoder architectures and also utilize complex attention modules or transformers. However, these networks are less accurate at capturing relevant local and global features with accurate boundary detection at multiple scales. We therefore propose an encoder-decoder network with a quick attention module and a multi-loss function (a combination of binary cross-entropy (BCE) loss, focal loss, and dice loss). We evaluate the generalization capability of our proposed network on two publicly available datasets for medical image segmentation, MoNuSeg and GlaS, and outperform state-of-the-art networks with a 1.99% improvement on the MoNuSeg dataset and a 7.15% improvement on the GlaS dataset. The implementation code is available at this link: https://bit.ly/histoseg
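Since the multi-loss combination is the central ingredient, a minimal PyTorch sketch of it is given below; the weighting coefficients and the focal-loss gamma are illustrative assumptions, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, alpha=1.0, beta=1.0, gamma=2.0, eps=1e-6):
    """BCE + focal + dice for binary segmentation; weights are illustrative."""
    bce = F.binary_cross_entropy_with_logits(logits, target)

    # focal loss: down-weight easy pixels by (1 - p_t)^gamma
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)
    per_pixel = F.binary_cross_entropy_with_logits(logits, target,
                                                   reduction="none")
    focal = ((1 - p_t) ** gamma * per_pixel).mean()

    # dice loss: overlap-based, robust to foreground/background imbalance
    inter = (p * target).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

    return bce + alpha * focal + beta * dice
```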